A smoothed monotonic regression via L2 regularization
Authors
Abstract
Monotonic Regression (MR) is a standard method for extracting a monotone function from non-monotonic data, and it is used in many applications. However, a known drawback of this method is that its fitted response is a piecewise constant function, while practical response functions are often required to be continuous. The method proposed in this paper achieves monotonicity and smoothness of the regression by introducing an L2 regularization term, and it is shown that the complexity of this method is O(n). In addition, our simulations demonstrate that the proposed method normally has higher predictive power than some commonly used alternative methods, such as monotonic kernel smoothers. In contrast to these methods, our approach is probabilistically motivated and has connections to Bayesian modeling.
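To make the formulation concrete, the sketch below fits a non-decreasing sequence to data by minimizing a least-squares fit term plus an L2 penalty on successive differences, subject to monotonicity constraints. It uses SciPy's general-purpose SLSQP solver, so it does not attain the O(n) complexity of the method described above; the function name smoothed_monotonic_fit, the penalty weight mu, and the starting point are illustrative choices, not part of the original work.

import numpy as np
from scipy.optimize import minimize

def smoothed_monotonic_fit(y, mu=1.0):
    # Minimize 0.5*||x - y||^2 + 0.5*mu*sum_i (x_{i+1} - x_i)^2
    # subject to x_{i+1} >= x_i (non-decreasing fit).
    # Generic QP sketch, not the paper's O(n) algorithm.
    y = np.asarray(y, dtype=float)

    def obj(x):
        d = np.diff(x)
        return 0.5 * np.sum((x - y) ** 2) + 0.5 * mu * np.sum(d ** 2)

    def grad(x):
        g = x - y
        d = np.diff(x)
        g[:-1] -= mu * d
        g[1:] += mu * d
        return g

    # Vector inequality constraint: all successive differences must be >= 0.
    cons = {"type": "ineq", "fun": lambda x: np.diff(x)}
    res = minimize(obj, np.sort(y), jac=grad, constraints=cons, method="SLSQP")
    return res.x

Increasing mu trades fidelity to the data for a smoother monotone fit; with mu = 0 the problem reduces to ordinary monotonic regression.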
Similar articles
A Dual Active-Set Algorithm for Regularized Monotonic Regression
Monotonic (isotonic) regression is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the monotonic regression, formulated as a least distance problem with monotonicity...
On the inductive bias of dropout
Dropout is a simple but effective technique for learning in neural networks and other settings. A sound theoretical understanding of dropout is needed to determine when dropout should be applied and how to use it most effectively. In this paper we continue the exploration of dropout as a regularizer pioneered by Wager et al. We focus on linear classification where a convex proxy to the misclass...
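For readers unfamiliar with the technique, a minimal sketch of standard (inverted) dropout follows; it illustrates the random masking that the cited paper analyzes as a regularizer, but it is a generic illustration rather than the paper's linear-classification setup. The function name and the drop probability p are illustrative.

import numpy as np

def dropout(activations, p=0.5, rng=None, train=True):
    # Inverted dropout: zero each unit with probability p during training
    # and rescale survivors by 1/(1-p) so the expected activation matches
    # the test-time (no-dropout) behavior.
    activations = np.asarray(activations, dtype=float)
    if not train or p == 0.0:
        return activations
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)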
Sparsity enhanced spatial resolution and depth localization in diffuse optical tomography
In diffuse optical tomography (DOT), researchers often face challenges in accurately recovering the depth and size of reconstructed objects. The recently developed Depth Compensation Algorithm (DCA) solves the depth localization problem, but the reconstructed images commonly exhibit over-smoothed boundaries, leading to fuzzy images with low spatial resolution. While conventional DOT solves ...
Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed ℓ1/ℓ2 Regularization
The l1/l2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works on blind deconvolution. Indeed, it benefits from a scale-invariance property that is highly desirable in the blind context. However, the l1/l2 function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of ...
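As an illustration of the scale-invariance point, the sketch below evaluates one common smoothed variant of the l1/l2 ratio penalty; the smoothing constants alpha and beta and the function name are assumptions for this example and need not match the exact smoothing used in the cited paper.

import numpy as np

def smoothed_l1_over_l2(x, alpha=1e-6, beta=1e-6):
    # l1/l2 ratio penalty with small constants added to the numerator and
    # denominator so the value stays finite near x = 0. With
    # alpha = beta = 0 the ratio is invariant to rescaling of x, which is
    # the property that makes it attractive for blind deconvolution.
    x = np.asarray(x, dtype=float)
    return (np.sum(np.abs(x)) + alpha) / (np.sqrt(np.sum(x ** 2)) + beta)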
High dimensional thresholded regression and shrinkage effect
High dimensional sparse modelling via regularization provides a powerful tool for analysing large-scale data sets and obtaining meaningful, interpretable models. The use of nonconvex penalty functions shows advantages in selecting important features in high dimensions, but the global optimality of such methods still demands more understanding. We consider sparse regression with a hard thresholding ...
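The hard-thresholding operator referred to above is simple to state; a minimal sketch follows, with the threshold t as an illustrative parameter. The cited paper's contribution concerns the statistical and optimization properties of estimators built from this operator, which the snippet itself does not address.

import numpy as np

def hard_threshold(beta, t):
    # Keep coefficients whose magnitude exceeds t; set the rest to zero.
    beta = np.asarray(beta, dtype=float)
    return np.where(np.abs(beta) > t, beta, 0.0)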